The AI backlash is here. It's focused on the wrong things.
Meanwhile, we already have examples of companies so eager to cut corners that they're automating tasks the AI can't handle -- like the tech site CNET autogenerating error-ridden financial articles. And when AI goes awry, the effects are likely to be felt disproportionately by the already marginalized. For all the excitement around ChatGPT and its ilk, the makers of today's large language models haven't solved the problem of biased data sets that have already embedded racist assumptions into AI applications such as face recognition and criminal risk-assessment algorithms. Last week brought another example of a Black man being wrongly jailed because of a faulty facial recognition match.
AI Chatbots Don't Care About Your Social Norms - NOEMA
Jacob Browning is a postdoc in NYU's Computer Science Department working on the philosophy of AI. Yann LeCun is a Turing Award-winning machine learning researcher, an NYU professor and the chief AI scientist at Meta. With artificial intelligence now powering Microsoft's Bing and Google's Bard search engines, brilliant and clever conversational AI is at our fingertips. But there have been many uncanny moments -- including casually delivered disturbing comments like calling a reporter ugly, declaring love for strangers or rattling off plans for taking over the world. To make sense of these bizarre moments, it's helpful to start by thinking about the phenomenon of saying the wrong thing.
Ethics and Policy for Technology -- Joanna Bryson
Artificial Intelligence (AI) and robots often seem like fun science fiction, but in fact they already affect our daily lives. For example, services like Google and Amazon help us find what we want by using AI. Every aspect of how Facebook works is based on AI and Machine Learning (ML). The reason your phone is so useful is that it is full of AI -- sensing, acting, and learning about you. All these tools not only make us smarter; their intelligence is also based partly on what they learn both from us and about us when we use them.
- Government (0.31)
- Education (0.31)
- Law (0.30)
Silicon Valley's Anti-Autonomy Backlash Is Afraid Of The Wrong Things
Humans are good at a lot of things, but when it comes to assessing risk in the modern world we have some serious limitations. It's not uncommon to be plagued with fear and anxiety while flying, for example, but the same people who quake at the thought of trusting their life to an airliner will often treat the far more dangerous task of driving with baffling nonchalance. It should be no surprise, then, that people are also wildly off the mark when it comes to assessing the risks presented by public road testing of autonomous vehicles. This misperception of risk is dramatically illustrated in a recent story by Washington Post reporter Faiz Siddiqui, which uncovers a kind of NIMBY (Not In My Back Yard) backlash against AVs in the heart of Silicon Valley. Siddiqui spoke with a number of Valley residents, most of whom work in the tech sector and believe in the long-term potential of self-driving cars, who object to being what one terms "the guinea pig" for this new technology.
- North America > United States > California (0.61)
- North America > United States > Arizona > Maricopa County > Tempe (0.05)
- Information Technology (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.96)
How Much Artificial Intelligence Should There Be in the Classroom? - EdSurge News
We can build robot teachers, or even robot teaching assistants. But should we? And if the answer is yes, what's the right mix of human and machine in the classroom? To get a fresh perspective on that question, this episode we take you to China, where a couple of us from EdSurge recently traveled for a reporting trip. One of the events we attended was a two-day conference about artificial intelligence in education organized by a company called Squirrel AI. Its vision felt unusually utopian.
- Asia > China (0.39)
- North America > United States (0.05)
AI in the Workplace: We're Measuring the Wrong Things
Good science fiction excels at tapping contemporary anxieties to forecast the future fate of humanity. Consider the list of workforce automation fears showcased in the recent "Kerblam" episode of the TV series Doctor Who: Robotized megacorporations, computer-controlled commerce, ubiquitous unemployment, human irrelevance and destructive despair. There's no spoiler in sharing that -- aside from some teleportation and a sonic screwdriver -- everything in the story line is already pretty plausible. The Doctor may be sci-fi fantasy, but the issues are real. Artificial intelligence (AI) technology is already reshaping many manufacturing and service industries, and it is rapidly disrupting other sectors as well -- for both good and ill.
- Education (0.49)
- Banking & Finance (0.35)
Using machine learning to improve dialog flow in conversational applications
In this episode of the Data Show, I spoke with Alan Nichol, co-founder and CTO of Rasa, a startup that builds open source tools to help developers and product teams build conversational applications. About 18 months ago, there was tremendous excitement and hype surrounding chatbots, and while things have quieted lately, companies and developers continue to refine and define tools for building conversational applications. We spoke about the current state of chatbots, specifically about the types of applications developers are building today and how he sees conversational applications evolving in the near future. As I described in a recent post, workflow automation will happen in stages. With that in mind, chatbots and intelligent assistants are bound to improve as underlying algorithms, technologies, and training data get better.
You weren't supposed to actually implement it, Google
Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily and accidentally make a classifier that is pretty racist. To demonstrate this, I included some very simple code as a "cautionary tutorial." The post got a fair amount of reaction. But eventually I heard from some detractors. Of course there were the fully expected "I'm not racist, but what if racism is correct" retorts that I knew I'd have to face.
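The mechanism behind that cautionary tutorial can be sketched in a few lines. This is a toy illustration with hand-made vectors and a hypothetical two-word lexicon, not the original post's real embeddings or data: a sentiment classifier fit on word embeddings inherits whatever associations the embedding space absorbed from its training corpus, so neutral words can end up carrying sentiment.

```python
# Toy sketch (hypothetical data): a sentiment classifier built on word
# embeddings silently inherits associations baked into the embedding space.
import numpy as np

# Pretend these 3-d vectors came from a large pretrained embedding model,
# where words have absorbed the company they keep in web text.
embeddings = {
    "delicious": np.array([ 0.9, 0.1, 0.0]),
    "terrible":  np.array([-0.9, 0.2, 0.0]),
    "italian":   np.array([ 0.4, 0.0, 0.3]),
    "mexican":   np.array([-0.3, 0.0, 0.3]),  # biased co-occurrence residue
    "food":      np.array([ 0.1, 0.5, 0.2]),
}

# "Train" a linear sentiment direction from a tiny labeled lexicon.
lexicon = {"delicious": 1.0, "terrible": -1.0}
X = np.stack([embeddings[word] for word in lexicon])
y = np.array(list(lexicon.values()))
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares sentiment weights

def sentiment(sentence):
    """Score a sentence as the mean sentiment of its known words."""
    words = [t for t in sentence.lower().split() if t in embeddings]
    return float(np.mean([embeddings[t] @ w for t in words]))

# Two sentences identical except for the cuisine word score differently,
# even though neither expresses any sentiment about the cuisine itself.
print(sentiment("italian food"), sentiment("mexican food"))
```

The bias never appears in the training lexicon; it rides in on the embeddings, which is exactly why following the "popular trend" uncritically is dangerous.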
It's Time To Recognize That Machines Are Learning All The Wrong Things
Data-driven algorithms govern many aspects of life: university admissions, resume screening, and a person's ability to get a car or home loan. Often, using data leads to more efficient allocation of resources and better outcomes for everyone. But algorithms can come with unintended consequences -- and without care, their application can result in a society we don't want. Typically, we think of algorithms as being neutral and objective, but when software is written and trained by humans, it often encodes the biases and prejudices of the people who make and shape it. Ultimately, the biases built into algorithms can be racist and marginalize low-ranking socioeconomic groups.
- Law (1.00)
- Information Technology > Services (0.30)
Google DeepMind Researchers Develop AI Kill Switch
Artificial intelligence doesn't have to include murderous, sentient super-intelligence to be dangerous. If a machine can learn based on real-world inputs and adjust its behaviors accordingly, there exists the potential for that machine to learn the wrong thing. If a machine can learn the wrong thing, it can do the wrong thing. Laurent Orseau and Stuart Armstrong, researchers at Google's DeepMind and the Future of Humanity Institute, respectively, have developed a new framework to address this in the form of "safely interruptible" artificial intelligence. In other words, their system, which is described in a paper to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence, guarantees that a machine will not learn to resist attempts by humans to intervene in its learning processes.
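The intuition behind safe interruptibility can be sketched with a toy learning loop. This is a deliberately simplified single-state Q-learning example under assumed parameters, not the paper's formal framework: a human operator overrides the agent's chosen action part of the time, and because the agent updates its value estimates from whatever action was actually taken, the interruptions change its behavior without corrupting what it learns or giving it any incentive to resist them.

```python
# Toy sketch of safe interruptibility (hypothetical environment, not the
# Orseau & Armstrong formalism): a human operator overrides the agent's
# action, and Q-learning updates stay uncorrupted by the interruptions.
import random

ACTIONS = ["left", "right"]
Q = {a: 0.0 for a in ACTIONS}  # single-state problem for brevity
ALPHA = 0.5                    # learning rate
EPSILON = 0.2                  # exploration rate

def reward(action):
    # "right" is the objectively better action in this toy world.
    return 1.0 if action == "right" else 0.0

random.seed(0)
for step in range(1000):
    # The agent proposes an epsilon-greedy action...
    if random.random() < EPSILON:
        proposed = random.choice(ACTIONS)
    else:
        proposed = max(Q, key=Q.get)
    # ...but a human interrupts half the time and forces "left".
    taken = "left" if random.random() < 0.5 else proposed
    # The update uses the action actually taken and its observed reward,
    # so an override just produces more data, not distorted values.
    Q[taken] += ALPHA * (reward(taken) - Q[taken])

print(Q)  # the value of "right" stays high despite frequent overrides
```

Interruptions here steer what the agent does, while its estimate that "right" is the best action survives intact; the paper's contribution is proving conditions under which this kind of guarantee holds for real reinforcement-learning agents.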